Supplementary Materials: In-Context Impersonation Reveals Large Language Models' Strengths and Biases
Leonard Salewski, Stephan Alaniz, Isabel Rio-Torto, Eric Schulz, Zeynep Akata
In this supplementary material we show additional results mentioned in the main paper. First, we give experimental details in Section A, including the prompt variations generated by meta-prompting (Section A.1) and the amount of compute required to reproduce our experiments (Section A.2); for more details on the code, please refer to the README.md. Next, we show results for Llama 2 on the bandit task in Section B. Afterwards, we show additional quantitative results for the expertise-based impersonation in Section C.1, and Section D provides additional details about the vision and language tasks. For all Vicuna-13B based experiments (bandit, reasoning, and vision) we used a single Nvidia A100-40GB GPU. (Work done whilst visiting University of Tübingen. 37th Conference on Neural Information Processing Systems, NeurIPS 2023.)
Neural Priming for Sample-Efficient Adaptation Matthew Wallingford, Vivek Ramanujan, Alex Fang, Aditya Kusupati
Presented with class names or unlabeled test samples, Neural Priming enables the model to recall, and condition its parameters on, relevant data seen throughout pretraining, thereby priming it for the test distribution. Neural Priming can be performed at inference time, even for pretraining datasets as large as LAION-2B. Performing lightweight updates on the recalled data significantly improves accuracy across a variety of distribution-shift and transfer-learning benchmarks.
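The recall-then-update idea above can be sketched as follows. This is a hypothetical, minimal illustration (not the authors' code): class-name text embeddings are matched against a cache of pretraining embeddings by cosine similarity, and each class weight is then nudged toward the mean of its recalled samples as a stand-in for the paper's lightweight update.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(x):
    """Normalize vectors to unit length along the last axis."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def retrieve(class_text_emb, pretrain_embs, k=5):
    """Indices of the k pretraining samples most similar to a class embedding."""
    sims = pretrain_embs @ class_text_emb  # cosine similarity (all inputs unit-norm)
    return np.argsort(-sims)[:k]

def prime_classifier(class_embs, pretrain_embs, lr=0.1, k=5):
    """Lightweight update: move each class weight toward its recalled samples' mean."""
    weights = class_embs.copy()
    for c, emb in enumerate(class_embs):
        idx = retrieve(emb, pretrain_embs, k)
        recalled_mean = pretrain_embs[idx].mean(axis=0)
        weights[c] = (1 - lr) * weights[c] + lr * recalled_mean
        weights[c] /= np.linalg.norm(weights[c])
    return weights

# Toy data: 2 classes, a cache of 100 pretraining embeddings, dimension 16.
class_embs = unit(rng.normal(size=(2, 16)))
pretrain_embs = unit(rng.normal(size=(100, 16)))
primed = prime_classifier(class_embs, pretrain_embs)
print(primed.shape)  # (2, 16)
```

In practice the cache would hold billions of embeddings (e.g. LAION-2B), so retrieval would use an approximate nearest-neighbor index rather than a dense matrix product.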
Open Vocabulary 3D Occupancy Prediction from Images Supplementary Material
In this supplementary material, we first give additional details about the method in Sec. 1. Table 1 lists the queries used for zero-shot semantic segmentation; we provide these for all the annotated classes in the dataset (second column). One can see that, for example, the class name "manmade" lacks descriptive specificity: in the text description of this class, we can find "... buildings, walls, guard rails, fences, poles, street signs, traffic lights ..." and more.
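The role of such text queries can be sketched as below. This is a toy illustration, not the paper's pipeline: the hash-seeded "text encoder" stands in for a real CLIP text encoder, and the queries are assumptions showing how a vague class name like "manmade" can be expanded using its dataset description before matching per-pixel features by cosine similarity.

```python
import hashlib
import numpy as np

def toy_text_encoder(text, dim=32):
    """Deterministic stand-in for a CLIP-style text encoder (illustrative only)."""
    seed = int.from_bytes(hashlib.md5(text.encode()).digest()[:4], "little")
    v = np.random.default_rng(seed).normal(size=dim)
    return v / np.linalg.norm(v)

# Hypothetical queries: vague class names are replaced by descriptive
# phrases built from the class's text description in the dataset.
queries = {
    "driveable surface": "a road where cars drive",
    "manmade": "a building, a wall, a fence, a pole or a street sign",
    "vegetation": "trees, bushes and other plants",
}
class_names = list(queries)
text_embs = np.stack([toy_text_encoder(q) for q in queries.values()])

def segment(pixel_feats):
    """Assign each pixel feature the class whose query embedding is most similar."""
    feats = pixel_feats / np.linalg.norm(pixel_feats, axis=-1, keepdims=True)
    return np.argmax(feats @ text_embs.T, axis=-1)

# Two toy "pixel features" that exactly match two of the text queries.
pixels = np.stack([
    toy_text_encoder("a building, a wall, a fence, a pole or a street sign"),
    toy_text_encoder("a road where cars drive"),
])
labels = segment(pixels)
print([class_names[i] for i in labels])  # ['manmade', 'driveable surface']
```

The same matching applies unchanged to 3D voxel features, provided they live in the same embedding space as the text queries.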
Learning Mask-aware CLIP Representations for Zero-Shot Segmentation (Supplementary Material)
In this supplementary material, we first introduce technical details of the "frozen CLIP" approaches in Sec. 1; the dataset settings are then shown in Sec. 2. Figure 1 presents an overview of the "frozen CLIP" approach; it is worth noting that all sub-images are resized to a fixed resolution. Figure 2 compares the three merge operations. We use three datasets, Pascal-VOC, COCO-Stuff, and ADE20K, to evaluate the performance of MAFT. Pascal-VOC: there are 10,582 images for training and 1,449 images for testing. ADE20K: ADE20K contains 25k images for training and 2k images for validation. Pascal-Context is an extension of Pascal-VOC 2010.
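The sub-image step in the "frozen CLIP" approach can be sketched as follows. This is a minimal assumption-laden illustration: each mask proposal's bounding box is cropped from the image and resized (here with nearest-neighbor indexing, and a 224x224 input size is assumed) before a frozen CLIP model would score it against class text embeddings.

```python
import numpy as np

def crop_and_resize(image, mask, out_size=224):
    """Crop the bounding box of a binary mask and resize it (nearest neighbor)."""
    ys, xs = np.nonzero(mask)
    crop = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    yi = np.linspace(0, crop.shape[0] - 1, out_size).astype(int)
    xi = np.linspace(0, crop.shape[1] - 1, out_size).astype(int)
    return crop[yi][:, xi]

# Toy image and a rectangular mask proposal.
image = np.arange(64 * 64 * 3).reshape(64, 64, 3)
mask = np.zeros((64, 64), dtype=bool)
mask[10:30, 20:50] = True

sub_image = crop_and_resize(image, mask)
print(sub_image.shape)  # (224, 224, 3)
# Each such sub-image would then be scored against class text embeddings
# by the frozen CLIP image encoder to assign a class to the mask.
```

Because every proposal is resized to the same resolution, crops of very different sizes can be batched through the frozen encoder in one pass.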